
A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks



Abstract

Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. Higher layers include shortcut connections to lower-level task predictions to reflect linguistic hierarchies. We use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference of the other tasks. Our single end-to-end model obtains state-of-the-art or competitive results on five different tasks from tagging, parsing, relatedness, and entailment tasks.
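The "simple regularization term" the abstract mentions penalizes shared parameters for drifting away from the values they held after the previous task was trained, which discourages catastrophic interference when later tasks are optimized. A minimal sketch of such a penalty (the function name, the NumPy parameter lists, and the `delta` weight are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def successive_reg_loss(task_loss, params, prev_params, delta=1e-2):
    """Augment a task loss with a successive-regularization penalty.

    Adds delta * ||theta - theta'||^2, where theta are the current shared
    parameters and theta' are their values after training the previous
    task, so improving one task's loss cannot freely overwrite the others.
    """
    # Sum of squared differences over every shared parameter array.
    penalty = sum(np.sum((p - q) ** 2) for p, q in zip(params, prev_params))
    return task_loss + delta * penalty

# Example: one shared parameter vector that moved by 1.0 in one coordinate.
params = [np.array([1.0, 2.0])]
prev_params = [np.array([1.0, 1.0])]
total = successive_reg_loss(0.0, params, prev_params, delta=0.5)  # 0.5 * 1.0
```

With `delta = 0` this reduces to plain per-task training; larger values of `delta` trade per-task improvement for stability of earlier tasks.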


